
Is “mindfulness” the key to A.I. success?

Jeremy Kahn
2020-09-26

According to one expert, three fundamental pillars should undergird any use of A.I.




Back in April, when the pandemic was at its peak in many parts of the world, I spoke to Ahmer Inam, the chief A.I. officer at Pactera Edge, a technology consulting firm in Redmond, Washington. At the time, Inam was focused on how the pandemic was wreaking havoc with A.I. models trained from historical data.

Last week, I caught up with Inam again. Lately, he’s been thinking a lot about why A.I. projects so often fail, especially in large organizations. To Inam, the answer to this problem—and to many others surrounding the technology—is something called “Mindful A.I.”

“Being mindful is about being intentional,” Inam says. “Mindful A.I. is about being aware and purposeful about the intention of, and emotions we hope to evoke through, an artificially intelligent experience.”

OK, I admit that when he said that, I thought it sounded kinda out there, like maybe Inam should lay off the edibles for a week or two. And Mindful A.I. has the ring of a gimmicky catchphrase. But the more Inam explained what he meant, the more I began to think he was on to something. (And just to be clear, Inam did not coin the term Mindful A.I. Credit should primarily go to Ovetta Sampson, the principal creative director at Microsoft, and De Kai, a professor at the University of California at Berkeley.)

Inam is arguing for a first-principles approach to A.I. He says that too often organizations go wrong because they adopt A.I. for all the wrong reasons: because the C-suite wrongly believes it’s some sort of technological silver bullet that will fix a fundamental problem in the business, or because the company is desperate to cut costs, or because they’ve heard competitors are using A.I. and they are afraid of being left behind. None of these are, in and of themselves, good reasons to adopt the technology, Inam says.

Instead, according to Inam, three fundamental pillars should undergird any use of A.I.

• First, it should be “human-centric.” That means thinking hard about what human challenge the technology is meant to be solving and also thinking very hard about what the impact of the technology will be, both on those who will use it—for instance, the company’s employees—and those who will be affected by the output of any software, such as customers.

• Second, A.I. must be trustworthy. This pillar encompasses ideas like explainability and interpretability—but it goes further, looking at whether all stakeholders in a business are going to believe that the system is arriving at good outputs.

• Third, A.I. must be ethical. This means scrutinizing where the data used to train an A.I. system comes from and what biases exist in that data. But it also means thinking hard about how that technology will be used: even a perfect facial recognition algorithm, for instance, might not be ethical if it is going to be used to reinforce a biased policing strategy. “It means being mindful and aware of our own human histories and biases that are intended or unintended,” Inam says.

A mindful approach to A.I. tends to lead businesses away from adopting off-the-shelf solutions and pre-trained A.I. models that many technology providers offer. With pre-trained A.I. models, it’s simply too difficult to get enough insight into critical elements of such systems—exactly what data was used, where it came from, and what biases or ethical issues it might present. Just as important, it can be difficult for a business to find out exactly where and how that A.I. model might fail.

My favorite example of this is IBM's "Diversity in Faces" dataset. The intention was a good one: Too many public datasets of faces being used to build facial-recognition systems didn't have enough images of Black or Latino individuals. And too often the annotations found in these systems can reinforce racial and gender stereotypes. In an effort to solve this problem, in January 2019, IBM released an open-source dataset of 1 million human faces that was supposed to be far more diverse, with far less problematic labels.
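The representation imbalance that dataset was meant to fix can be surfaced with even a crude audit of annotation frequencies. A minimal sketch of such a check (the group labels and the 10% threshold here are hypothetical illustrations, not drawn from IBM's dataset):

```python
from collections import Counter

def audit_label_balance(labels, threshold=0.1):
    """Return the share of each annotation category that falls below
    `threshold` — a crude first pass at spotting under-represented
    groups in a dataset's labels."""
    counts = Counter(labels)
    total = sum(counts.values())
    return {label: count / total
            for label, count in counts.items()
            if count / total < threshold}

# Hypothetical annotations for a 100-image sample:
annotations = ["group_a"] * 80 + ["group_b"] * 15 + ["group_c"] * 5
print(audit_label_balance(annotations))  # → {'group_c': 0.05}
```

A check like this says nothing about where the images came from or whether their use was consented to, which is exactly the gap the next paragraph describes.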

All sounds good, right? What company wouldn't want to use this more diverse dataset to train its facial-recognition system? Well, there was just one problem: IBM had created the dataset by scraping images from people's Flickr accounts without their permission. So users who blindly adopted the new dataset were unwittingly trading one A.I. ethics problem for another.

Another consequence of Inam's three pillars is that A.I. projects can't be rushed. Running a human-centric design process and thinking through all the potential issues around trustworthiness and ethics takes time. But the good news, Inam says, is that the resulting system is far more likely to actually meet its goals than one that is sped into production.

To meet all three pillars, Inam says it is essential to involve people with diverse perspectives, both in terms of race, gender and personal backgrounds, but also in terms of roles within the organization. “It has to be an interdisciplinary group of people,” he says.

Too often, the teams building A.I. software sorely lack such diversity. Instead, engineering departments are simply told by management to build an A.I. tool that fulfills some business purpose, with little input during the conceptualization and testing phases from other parts of the company. Without diverse teams, it can be hard to figure out what questions to ask—whether on algorithmic bias or legal and regulatory issues—let alone whether you've got good answers.

As Inam was speaking, I was reminded of that old adage, “War is too important to be left to the generals.” Well, it turns out, A.I. is too important to be left to the engineers.

The intellectual property in content published by Fortune China is exclusively owned or held by Fortune Media IP Limited and/or the relevant rights holders. Reproduction, excerpting, copying, or mirroring without permission is prohibited.